Moderate: Red Hat Storage Console 2 security and bug fix update

Related Vulnerabilities: CVE-2016-7062

Synopsis

Moderate: Red Hat Storage Console 2 security and bug fix update

Type/Severity

Security Advisory: Moderate

Topic

An update is now available for Red Hat Storage Console 2 for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Storage Console is a new Red Hat offering for storage administrators that provides a graphical management platform for Red Hat Ceph Storage 2. Red Hat Storage Console allows users to install, monitor, and manage a Red Hat Ceph Storage cluster.

Security Fix(es):

  • A flaw was found in the way authentication details were passed between rhscon-ceph and rhscon-core. An authenticated, local attacker could use this flaw to recover the cleartext password. (CVE-2016-7062)
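
The underlying issue is generic to Unix-like systems: a process's argument vector is world-readable under /proc/<pid>/cmdline (and via ps) for the lifetime of the process, so any secret passed as a command-line parameter is visible to every local user. The following minimal Python sketch, which is not the actual rhscon-core/rhscon-ceph code and uses purely illustrative values, demonstrates the exposure and a stdin-based hand-off that avoids it:

```python
# Minimal sketch (not the rhscon code): a secret passed on the command
# line is world-readable via /proc/<pid>/cmdline, while a secret passed
# on stdin is not. Runs on Linux; the values are illustrative.
import subprocess
import sys

secret = "s3cret"  # illustrative placeholder

# Leaky pattern: the secret lands in the child's argv.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(3)", "--password", secret]
)
with open(f"/proc/{child.pid}/cmdline", "rb") as f:
    # Any local user can perform the same read (or simply run `ps -ef`).
    print(f.read().split(b"\0"))  # [..., b'--password', b's3cret', b'']
child.wait()

# Safer pattern: keep the secret off argv and deliver it on stdin.
subprocess.run(
    [sys.executable, "-c", "import sys; pw = sys.stdin.readline().rstrip()"],
    input=(secret + "\n").encode(),
    check=True,
)
```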

Bug Fix(es):

  • Previously, the PG count was calculated on a per-pool basis instead of at the cluster level. With this fix, automatic calculation of PGs is disabled, and the Ceph PG calculator is used to calculate the PG values per OSD to keep the cluster in a healthy state (see the PG-count sketch after this list). (BZ#1366577, BZ#1375538)
  • Previously, issuing a command to compact the data store during a rolling upgrade rendered the Ceph monitors unresponsive. The compaction command is now skipped during a rolling upgrade, and the Ceph monitors remain responsive. (BZ#1372481)
  • Previously, a rolling upgrade failed when a custom cluster name other than 'ceph' was used, causing the ceph-ansible play to abort. With this fix, the playbook passes flags indicating the cluster name, defaulting to 'ceph' when unspecified, and rolling upgrades succeed with custom cluster names. (BZ#1373919)
  • Previously, the pools list in the Console displayed incorrect storage utilization and capacity data when multiple CRUSH hierarchies were present. With this fix, the pools list displays the correct storage utilization and capacity data. (BZ#1358267)
  • Previously, the CPU utilization chart displayed only user-process CPU utilization and omitted system CPU utilization. With this fix, the chart displays the combined user and system CPU utilization percentage. (BZ#1358461)
  • Previously, network utilization was not calculated correctly for full-duplex links: a full-duplex channel carries traffic in both directions simultaneously, so its effective bandwidth is twice the nominal link speed. With this update, network utilization is calculated properly (see the utilization sketch after this list). (BZ#1366242)
  • Previously, the Host list page displayed incorrect data in the utilization charts. With this fix, the charts display the correct data. (BZ#1358270)
  • Previously, Calamari failed to reflect the correct values for OSD status. With this update, the dashboard displays the correct, real-time OSD status. (BZ#1359129)
  • Previously, the text on the Add Storage tab was confusing because the storage type was not clearly described. With this fix, the text has been updated with a short description of pools and RBDs to remove the ambiguity. (BZ#1365983)
  • Previously, when importing a cluster with collocated journals, the journal size was populated incorrectly in the MongoDB database. With this fix, the journal size and journal path are displayed correctly in the OSD summary of the Host OSDs tab. (BZ#1365998)
  • Previously, the clusters list in the Console displayed IOPS with incorrect units. With this fix, the units are removed and IOPS is shown correctly as a numeric count. (BZ#1366048)
  • Previously, when checking cluster system performance, selecting an elapsed-hour range incorrectly displayed tick marks on both ends of the range. With this fix, the Console displays the system performance graph with a tick mark only on the selected elapsed hour(s). (BZ#1366081)
  • Previously, the journal device details did not synchronize as expected during the pool creation and cluster import workflows. With this fix, the actual device details for OSD journals are fetched and displayed as expected in the UI. (BZ#1342969)
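
With automatic per-pool calculation disabled, PG values follow the Ceph PG calculator's widely documented rule of thumb: target roughly 100 PGs per OSD across the cluster, divide by the pool's replica count, and round to the nearest power of two. The sketch below illustrates that public formula; it is not the Console's internal code, and the 100-PGs-per-OSD target is the calculator's default assumption:

```python
# A sketch of the Ceph PG calculator's rule of thumb (not the Console's
# internal code): aim for ~100 PGs per OSD cluster-wide, divide by the
# pool's replica count, and round to the nearest power of two.
def nearest_power_of_two(n: float) -> int:
    lower = 1
    while lower * 2 <= n:
        lower *= 2
    upper = lower * 2
    return upper if (upper - n) < (n - lower) else lower

def pg_count(num_osds: int, pool_size: int, target_pgs_per_osd: int = 100) -> int:
    return nearest_power_of_two(num_osds * target_pgs_per_osd / pool_size)

# Example: 9 OSDs with 3-way replication -> 9 * 100 / 3 = 300 -> 256 PGs,
# comfortably under the "too many PGs per OSD (768 > max 300)" warning
# from BZ#1366577 that a hard-set per-pool value could trigger.
print(pg_count(9, 3))  # 256
```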

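The corrected network arithmetic follows from the full-duplex observation above: transmit and receive each get the full link speed, so combined rx+tx throughput is measured against twice the nominal speed. A minimal sketch of that formula (illustrative, not the Console's exact code):

```python
# A sketch of the corrected full-duplex utilization formula (illustrative,
# not the Console's exact code): a full-duplex link carries traffic both
# ways at once, so its total capacity is twice the nominal link speed.
def utilization_pct(rx_bps: float, tx_bps: float, link_speed_bps: float) -> float:
    return 100.0 * (rx_bps + tx_bps) / (2.0 * link_speed_bps)

# Example: 400 Mb/s in and 600 Mb/s out on a 1 Gb/s full-duplex link is
# 50% utilization, where a half-duplex calculation would report 100%.
print(utilization_pct(400e6, 600e6, 1e9))  # 50.0
```
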
All users of Red Hat Storage Console are advised to upgrade to these updated packages, which fix these security issues and bugs.

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

Affected Products

  • Red Hat Storage Console 2 x86_64
  • Red Hat Storage Console Node 2 x86_64

Fixes

  • BZ - 1342969 - OSD journal details provides incorrect journal size
  • BZ - 1346379 - Command line parameters exposed (too spurious) as well as passwords shown
  • BZ - 1358267 - Wrong size and utilization of pool
  • BZ - 1358270 - cpu utilization charts on Host list dashboard doesn't match reported values
  • BZ - 1358461 - cpu utilization values reported by RHSC 2.0 are wrong
  • BZ - 1358832 - Enable mongodb authentication
  • BZ - 1359129 - Bad OSD status
  • BZ - 1365983 - [RFE] Very confusing "Add Storage" UI organization
  • BZ - 1365998 - Incoherent OSD journal size display in the UI
  • BZ - 1366048 - Cluster list window shows incorrect performance unit
  • BZ - 1366081 - Cluster Performance Graph Range Selection Popup Broken
  • BZ - 1366242 - Network utilization is not calculated properly
  • BZ - 1366577 - Wrong calculation of PGs per OSD leads to cluster in HEALTH_WARN state with explanation "too many PGs per OSD (768 > max 300)"
  • BZ - 1366620 - Node initialization fails with "loop" type of disks on node
  • BZ - 1371496 - Network utilization doesn't work with SELinux in enforcing mode
  • BZ - 1371848 - Installation of ceph-installer failing on RHEL 7.3 because of conflicts with file from package firewalld-filesystem
  • BZ - 1372481 - [ceph-ansible] : rolling_update got hung in task 'compress the store as much as possible'
  • BZ - 1373919 - [ceph-ansible] : rolling update will fail if cluster name is other than 'ceph'
  • BZ - 1375538 - PG count for pool creation is hard set and calculated in a wrong way
  • BZ - 1375972 - when cluster is expanded (new machine added), console doesn't warn admin about implications of associated recovery operation
  • BZ - 1381681 - CVE-2016-7062 rhscon-ceph: password leak by command line parameter

CVEs

  • CVE-2016-7062

References

  • https://access.redhat.com/security/updates/classification/#moderate